A few comments about the functions in Iterate!
══════════════════════════════════════════════════════════════════════════════
Function A. x=c*x+a*cos(y) y=d*y+b*sin(x+a*cos(y))
──────────────────────────────────────────────────────────────────────────────
This was a mistake. I was trying to make Function B. (If you study the two,
you'll see what happened.) It turned out to be a rather fortunate mistake,
though, as this function is more interesting than Function B. I used this
function in my composition "Chaotic Suite for Computer and Synthesizer".
Parameters
──────────
On this function, I usually set a=b and c=d=1. You can try a=b=1, then
gradually increase this value to a=b=2, a=b=2.5 and so on. You will see the
function gradually break up and become more chaotic.
You can also experiment with setting 'a' and 'b' to different values. This
gives an interesting effect.
If you change 'c' and 'd', you will probably want to keep them close to 1,
i.e., c=.99, d=1.
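If you would like to play with this function outside of Iterate!, here is a
rough sketch (in Python--Iterate! itself is not written this way) of what one
iteration step looks like. The starting point is arbitrary, and the defaults
are just the a=b=1, c=d=1 values suggested above:

    import math

    def function_a(x, y, a=1.0, b=1.0, c=1.0, d=1.0):
        """One step of Function A: x=c*x+a*cos(y)  y=d*y+b*sin(x+a*cos(y))"""
        x_new = c * x + a * math.cos(y)
        y_new = d * y + b * math.sin(x + a * math.cos(y))
        return x_new, y_new

    # Iterate from an arbitrary starting point and print a few iterates;
    # Iterate! plots these points instead of printing them.
    x, y = 0.5, 0.5
    for i in range(10):
        x, y = function_a(x, y)
        print(i + 1, x, y)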
Function B. x=c*x+a*cos(y) y=d*y+b*sin(x)
──────────────────────────────────────────────────────────────────────────────
Close to Function A. In both Functions A and B you'll notice that the entire
plane is tiled over with repetitive figures. This is of course because of the
periodic nature of the trig functions.
Parameters
──────────
Try the same sorts of parameters as Function A.
Function C. x=cos(mu)*x-sin(mu)*(y-x²) y=sin(mu)*x+cos(mu)*(y-x²)
──────────────────────────────────────────────────────────────────────────────
This function is due to Henon (one of the first mathematicians to put much
study into the area we now call chaos theory). I found it in a fascinating
book by Ekeland. Unfortunately, I had to give the book back to the library,
and now I live 1000 miles away from the library, and I can't for the life of
me remember the title.
It is quite an interesting little book, though, if you can find it anywhere.
Ekeland discussed this function and had a graph of it. I tried to duplicate
the graph on my computer, and so Iterate! was born.
This function is area preserving, which simply means that if you take any
region and put it through the function, the resulting region has the same area
as the one you started with. Symbolically, you would say that for a region R
and a function F, the area of R = the area of F(R).
The fact that a function is area preserving has certain ramifications for the
physical interpretation of the function. For instance, imagine a planet
revolving around the sun. Imagine a plane which cuts through the planet's
orbit at right angles to the orbit. Imagine that each time the planet comes
around the sun, you mark the spot where its center of gravity passes through
the plane.
(If this explanation was too complicated for you, just think of making a dot
on the spot where the earth is at 12 midnight on December 31st. Make a new dot
every year on December 31st.)
What have we got? An iterated function, of course. You can think
of the planet as being the "function", and each orbit of the planet is one
iteration of the function, which produces one dot on the plane.
Now if the planet is the only planet in the whole universe, and if it happens
to be orbiting the sun in a perfectly circular orbit, we're going to have a
pretty darn boring graph. Each time the planet comes around, it's just going
to hit the very same spot again. So it is like a fixed point. It doesn't go
anywhere and it's not very exciting.
But let's suppose instead that there are a few other planets in our solar
system, and maybe our planet has a moon or two orbiting it. Now all of a
sudden our perfectly circular orbit isn't quite so circular any more. In
fact, our orbit is going to have quite a few interesting little perturbations
to it, and it definitely isn't going to hit the same spot on our plane each
time around. It is going to make a fascinating pattern of dots on the plane--
something, as a matter of fact, like what you'll see if you run Function C.
Because that's how Function C was made--imagining this kind of orbit, and how
it would hit the plane, and then just figuring out the equations for the dots,
so you don't have to think about all that gosh darned orbiting every time. In
fact, the parameter mu in this function is some kind of orbital incidence
parameter or something. Find Ekeland, ask him, and then we'll both know.
At any rate, back to area preserving. It turns out that any function that is
derived this way--by thinking about orbits hitting a plane and so on--has this
area preserving property. And (as I recall) the converse is true, too--if you
have an area preserving function, then you can find an orbit that fits it
perfectly.
. . .
Here's another interesting point about this function. Try this function with
mu=77 degrees, and you will notice that it makes a fractal pattern (see
Iterate!.txt for more details). What happens is that there is a large central
circle with 5 smaller circles surrounding it. Each of the smaller circles
is surrounded by 5 even smaller circles. These even smaller circles are
surrounded by yet smaller circles, and so on ad infinitum.
(Actually, if you try this out, you will find that it is not always 5 circles
surrounding, but maybe 3,4,5,6, or whatever. But for the sake of simplicity,
let's just say it's 5 each time.)
Here's a little schematic of sort of what this looks like (this time with
4 "circles" surrounding each larger "circle"--but that's the limitations
of ASCII):
   O     O         O     O
     ┌─┐             ┌─┐
     │ │             │ │
     └─┘             └─┘
   O     O         O     O
           ┌────┐
           │    │
           │    │
           │    │
           └────┘
   O     O         O     O
     ┌─┐             ┌─┐
     │ │             │ │
     └─┘             └─┘
   O     O         O     O
Now here's the point: Let's suppose it takes 100 iterations to draw the
middle (largest) circle satisfactorily. Then it takes 5 times as many
iterations to draw the five surrounding circles to the same degree of
detail (if you try this function, you'll soon see why--it's because one
point draws the entire middle circle, and another point draws all five
surrounding circles, so it's spread 5 times as thin and requires 5 times
the iterations to look as good).
If we go down another level to the next smaller circles, you will again have
to do 5 times the iterations to get the same degree of detail. So now we're up
to 5 x 5 x 100 iterations = 2,500 iterations. Going down another level
multiplies by 5 again, and we have 12,500 iterations. Going to a fifth
level requires 5 x 12,500 = 62,500 iterations. For the sixth level,
5 x 62,500 = 312,500 iterations. For the seventh level,
5 x 312,500 = 1,562,500. Skipping a few levels (I assume you can
multiply by 5 on your own), we find that the 20th level requires about
2,000,000,000,000,000 iterations. (That's two quadrillion, just in case
you're wondering. We know that this is a very, very, VERY large number,
because--at this writing--it is actually larger than the amount of the
national debt. Now that's amazing.)
On my computer--a 386-40--it takes 7.24 seconds to do 20,000 iterations of
this function. So how long would it take to do 2,000,000,000,000,000
iterations? Approximately 23,000 years.
But let's forget about my clunky old computer. Let's imagine that we don't
want to wait 23,000 years, and so we enlist the services of the world's
fastest supercomputer. Fantasizing a bit (as computer nerds are wont to do),
let's imagine that the world's fastest supercomputer is a trillion times
faster than my 386. So how long is it going to take the world's fastest
supercomputer to do these 2 quadrillion iterations? A bit of calculation
reveals--a mere .724 seconds.
So there we are. With the world's fastest supercomputer, we've conquered
iteration. We can look all the way down to the 20th level of our function
in a flash. And surely there's no need to look on past the 20th level . . .
But let's suppose we DO want to look lower than the 20th level. After all,
there are hundreds--thousands--millions--of levels. This fractal pattern of
circles surrounding ever smaller circles goes on forever. The 20th level barely
scratches the surface. Let's suppose we want to go just a bit deeper, down
to the 40th level. We set our spiffy supercomputer to calculating, step out
for a little snack while it does its stuff, step back in the room and
discover that . . . there are a mere 2,000,000 YEARS left before the
calculation is complete.
And that's just for one point. We'll need a good 30 or 40 points to make
a decent looking picture. Might as well settle down for the long haul . . .
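(If you'd like to check this arithmetic for yourself, here is a little Python
sketch of the whole back-of-the-envelope calculation. The only inputs are the
figures quoted above: 100 iterations for the first level, 5 times more for
each deeper level, and 7.24 seconds per 20,000 iterations on my 386-40.)

    SECONDS_PER_ITERATION = 7.24 / 20000      # the 386-40 timing quoted above
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    def iterations_for_level(level, base=100, branching=5):
        """Level 1 takes 'base' iterations; each deeper level takes 5 times more."""
        return base * branching ** (level - 1)

    for level, speedup in [(20, 1), (20, 1e12), (40, 1e12)]:
        n = iterations_for_level(level)
        seconds = n * SECONDS_PER_ITERATION / speedup
        print("level %2d, computer %g times faster than a 386-40:" % (level, speedup))
        print("   %.3g iterations, %.3g seconds, %.3g years"
              % (n, seconds, seconds / SECONDS_PER_YEAR))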
On the other hand, it's fascinating to think that, by understanding the
fractal structure of the graph, in a few moments the human mind can
grasp the essential idea of the function down to the 20th, the 40th, or even
the 4,000,000th level.
                       HUMANS        SUPERCOMPUTERS
                       ──────        ──────────────
                          1                 0
(Cannon blast; Marching band blares the HUMANS' School Song, stage right;
SUPERCOMPUTERS, stage left, overheard muttering "$385f:a348 af 3e 42 2d")
This isn't an isolated phenomenon, by the way. You'll notice it all the time
when you work with Iterate!. The further you zoom in on a graph, the more
iterations it takes to make it look good. The more iterations you do, the
longer it takes. No matter how fast your computer is, you soon find yourself
with graphs that would take an hour, a day, or even longer to complete.
Parameters
──────────
The only parameter for this function is 'mu', which is an angle and is
measured in degrees. If you set mu=77 you will see the five circles described
above.
It is fascinating to explore this function as 'mu' changes from 10 degrees
(where the function makes sort of an egg shape) all the way through 360
degrees and back to 10 degrees.
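For the curious, here is a bare-bones Python sketch of iterating Function C.
Note that both formulas use the OLD values of x and y; the mu=77 default and
the starting point are just examples, and the escape check is only there so a
wandering point can't overflow:

    import math

    def function_c(x, y, mu_degrees=77.0):
        """One step of Function C ('mu' is entered in degrees):
           x = cos(mu)*x - sin(mu)*(y - x^2)
           y = sin(mu)*x + cos(mu)*(y - x^2)"""
        mu = math.radians(mu_degrees)
        x_new = math.cos(mu) * x - math.sin(mu) * (y - x * x)
        y_new = math.sin(mu) * x + math.cos(mu) * (y - x * x)
        return x_new, y_new

    # Trace one orbit; plotting these (x, y) pairs gives the circles
    # described above.
    x, y = 0.1, 0.05
    orbit = []
    for i in range(10000):
        x, y = function_c(x, y)
        if abs(x) > 1e6:                      # the point has left the picture
            break
        orbit.append((x, y))
    print(len(orbit), "points traced; first few:", orbit[:3])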
Function D. x=a-b*y-x² y=x (the Henon Map)
──────────────────────────────────────────────────────────────────────────────
This is the famous Henon Map. It has been studied intensely by chaotic
mathematicians (and I'm not kidding). This simple map displays an astounding
variety of behaviors--most of the behaviors, in fact, that have been
discovered in the study of chaos theory.
One interesting fact about the Henon Map is that in a certain region, with
certain parameters, the Henon map contains a duplicate of the Horseshoe Map
(Function L). (A topological duplicate, that is--meaning that the two share
all essential qualities, although things may be deformed so that superficially
they don't look all that similar.)
Many books on chaos theory spend a good deal of time exploring this function
(so if you want to find out more about it, just look one of them up).
Parameters
──────────
This function is a good one for using the <M> command rather than just <Space>.
A lot of graphs that don't look too interesting when you use <Space> suddenly
become quite interesting when you trace the orbits with <M>.
One interesting thing to do with this function is to leave one parameter
constant while you gradually change the other. For instance, leave a=1 as you
gradually change 'b' from 1 to -1. You will find many bifurcation
points--attracting fixed points will split into a period three attracting
point, and so on.
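Here is a rough Python sketch of that kind of sweep--nothing to do with
Iterate!'s own code, and the starting point, step size and cutoffs are
arbitrary choices. An orbit that has settled onto a period-n attracting point
revisits only about n distinct values, so counting distinct values is a crude
way of spotting the bifurcations:

    def henon(x, y, a=1.0, b=0.9):
        """One step of Function D (the Henon Map): x = a - b*y - x^2   y = x"""
        return a - b * y - x * x, x

    a = 1.0
    for step in range(41):                    # sweep b from 1 down to -1
        b = 1.0 - step * 0.05
        x, y = 0.1, 0.1
        escaped = False
        for _ in range(2000):                 # let the transient die out
            x, y = henon(x, y, a, b)
            if abs(x) > 1e6:
                escaped = True
                break
        if escaped:
            print("b=%5.2f  orbit escapes to infinity" % b)
            continue
        values = set()
        for _ in range(256):                  # then sample the attractor
            x, y = henon(x, y, a, b)
            values.add(round(x, 5))
        print("b=%5.2f  about %d distinct x values" % (b, len(values)))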
Here are some interesting spots:
Parameters     What you'll see
──────────     ────────────────────────────────────────────────────────────
a=1 b=.9       An attracting fixed point surrounded by a period three
               attracting point. In between the fixed point and the period
               three point is a period three repelling point.

               Use the <M> command to find an attracting point. Then as
               you're holding <M>, try moving the mouse to "perturb" the
               point. When you do this, does the cursor stay near your
               original attracting point, or does it move towards the other
               one? You will find that one of the attracting points here is
               much more stable than the other.

a=1 b=-0.45    Somewhere in this area is where you'll see the duplicate of
a=1 b=-0.55    the horseshoe map. It is fascinating to try out small changes
               near these parameters (b=-0.46, b=-0.44 etc.) to see how this
               changes the graph. At certain parameters, there will no longer
               be any periodic attracting points, but there will instead be a
               "Strange Attractor" (an entire fractal-shaped area that
               attracts all nearby points).
Function E. z=z²+c, c=a+b*i (on the complex plane)
──────────────────────────────────────────────────────────────────────────────
This is the function that generates the Mandelbrot Set and also the classic
Julia Sets.
This function is a function on the complex plane. 'Z' is a complex variable
(representing a point on the plane), as is 'c'. That is to say, 'c' is not
our usual parameter 'c' for which you can enter a value on the parameter
screen. Actually, c=a+bi, where i=sqrt(-1), and 'a' and 'b' are our usual
parameters.
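Here is a tiny Python sketch of what iterating Function E means--Python's
built-in complex numbers do the c=a+b*i bookkeeping. The a=-0.1, b=0 values
are one of the parameter pairs listed in the table further down, and the
starting point is arbitrary:

    def function_e(z, a=-0.1, b=0.0):
        """One step of Function E:  z = z*z + c,  where c = a + b*i."""
        return z * z + complex(a, b)

    # Iterate a starting point and watch it settle onto an attracting fixed
    # point (with these particular parameters).
    z = complex(0.3, 0.2)
    for i in range(20):
        z = function_e(z)
        print(i + 1, z)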
Iterate! is a powerful tool for exploring the behavior of Julia Sets and the
relationship between the Mandelbrot Set and Julia Sets. If you have only seen
static pictures of Julia Sets before, you will be amazed at the dynamics that
underlie the graphs (as though the static pictures weren't amazing enough
already!).
One of the most fascinating things you can do with Function E is to draw the
Julia Set for the function, and then explore the different areas of the Julia
Set using the <M> and <Space> commands. Here is how to do this:
1. Choose Function E using the <F> command. Enter the parameters you
want as 'a' and 'b'.
2. Draw the Julia Set using the <J> command. (The <J> command uses
Function F to draw the Julia Set; see the notes below to see how
it does it.)
3. Use the <H> command to hide the graphics help line (this makes the
<M> command work much better).
4. Move the cursor to various places in the Julia Set and use the <M>
command to trace the iterates.
You can also use <Space> to iterate in the usual way, but the <M> command
usually gives you a better idea of what's going on. One good technique is to
first use <Space> to get a quick preview, and then (without moving the cursor)
repeatedly press <M> to get a detailed look at what happened.
If you explore the Julia Set in this way, you will soon discover that the
large and small pools of the Julia Set correspond to certain behaviors in
Iterate!. Here is a schematic of a possible Julia Set:
┌──┬────┬──────────┬───────────────────────┐
│G │    │          │                       │
└──┤ E  │          │                       │
   └────┤          │                       │
        │ C        │                       │
        │          │                       │
        │          │                       │
        └──────────┤ A                     ├─────────┐
                   │                       │         │
                   │                       │         │
                   │                       │         │
                   │                       │    B    ├────┐
                   │                       │         │    ├──┐
                   │                       │         │ D  │F │
                   └───────────────────────┴─────────┴────┴──┘
In this Julia Set, any point in Pool F immediately jumps to Pool E. A point
in Pool E jumps to Pool B. A point in Pool B jumps to Pool A. In the middle of
Pool A is an attracting fixed point. All the points in Pool A spiral inwards
to this attracting fixed point.
A point starting in Pool G has this sequence of Pools:
G D C A
Once the point lands in Pool A, it of course spirals inward to the attracting
fixed point.
So every point in the Julia Set ends up in Pool A, spiralling towards the
attracting fixed point. However, before the point gets to Pool A, it may
travel a rather tortuous and circuitous route. In a real Julia Set, it will
be even more tortuous and circuitous than what I've described here, because a
real Julia Set has many more (many more, that is, if you consider infinity to
be many more than seven) pools than our schematic. Each of our pools above
would be surrounded in fractal fashion by pools upon pools upon pools. Each
of these pools, no matter how small, will send its points to a certain larger
pool, and this larger pool will in turn send it to a larger pool, until
finally it arrives in Pool A. It is possible to find points that hit a
hundred, a thousand, or a million smaller pools before finally landing in Pool
A.
This, by the way, is only the least complicated type of organization you will
find in Julia Sets. Most Julia Sets have pools with attracting or repelling
periodic points of order two or higher, as well as other complicated dynamics.
Parameters
──────────
Here are a few parameters you can try.
Function   Parameters     What you'll see
────────   ──────────     ───────────────
   E       a=-0.1         Attracting fixed point
           b= 0

   E       a=-0.982       Two repelling fixed points
           b= 0.2838

   E       a=-.7627837    Three spirals
           b= .1498945

   E       a=-.487182     An attracting point of period 5,
           b= .532951     surrounding a repelling fixed point

   E       a=-1.1369      An attracting point of period 6
           b= 0.23925

   E       a=-0.712515    An attracting point of period 13
           b= 0.237683

   E       a=-.996592     Attracting point of order 16
           b= .284015

   E       a=-.9964357    Attracting point of order 32 surrounding repelling
           b=.28822655    points of period 16, 4, 2, and 1

   E       a=-1.170081    A period 105 (?) point, surrounding repelling
           b= .2441261    points of various periods

   E       a= 0.60478     An attracting point of period 125, surrounding
           b= 0.60189     a repelling point of period 25, surrounding
                          another repelling point of period 5, surrounding
                          a repelling fixed point

   E       a=-.6033155    A totally disconnected Julia Set--that is, no
           b=-.8685983    point of the set is touching another point of the
                          set

   E       a= .5          Ditto
           b= .2
Another fun thing to do if you have the Mandelbrot-Julia Generator is to draw
a Julia Set there, then run Iterate! and import the Julia Set using the <R>
command. Then investigate the behavior of the various parts of the Julia Set
using <M> and <Space>. This is similar to what I described above with the <J>
command. The difference is that the Mandelbrot-Julia Generator will usually
draw a much more detailed Julia Set than the <J> command.
* * *
It is rather frustrating to try and find good parameters for Function E by
entering them by hand. Luckily, there is an easier way.
The basis of this method is the fact that there is a correlation between
points on the Mandelbrot Set and the parameters of a Julia Set. This is one
of the facts that Benoit Mandelbrot discovered about the Mandelbrot Set (which
is of course named after him), and it has been rather thoroughly investigated
by later mathematicians. Check some of the references in "Iterate!.txt" if
you want to know more. --On second thought, strike that. Using Iterate!,
you can easily re-discover what these mathematicians discovered, and it might
be more fun that way. Here's how you do it:
1. Use the <F> command to choose Function E.
2. Use the <R> command to load "Mandelbr.gph".
3. Move the cursor to any spot in or near the Mandelbrot Set.
4. Press <Z> to load these coordinates as the parameters.
5. Press <J> to draw the Julia Set associated with these parameters.
6. Use <Space> and <M> to investigate the properties of Function E with
these parameters.
(Note that you won't be able to load "Mandelbr.gph" unless you have an EGA or
VGA adaptor and are running in EGA mode or VGA1 mode. If you don't have EGA
or VGA, or if you would like to run in a different video mode, you should
consider registering so that you will have my Mandelbrot-Julia generator that
will draw the Mandelbrot Set--or any part of it--in any graphics mode.)
The <Z> command is at the heart of this technique. What it does is load the
(x,y) coordinates of the point in the Mandelbrot Set into parameters 'a' and
'b' of the function. (It also gives you a chance to change the other
parameters, and resets the window to a good value for viewing an entire Julia
Set).
Here's a rough schematic of the Mandelbrot Set:
                     D
                     ┌┐
                    ┌┴┴┐
                    │  │
                    │B │
                ┌───┴  ┴───┐
                │          └┐
                │           └┐
                │            │
    H      ┌────┤            │
    ┌┐  ┌──┤    │           ┌┘
  ──┤├──┤G   F       A     ─┤
    └┘  └──┤    │           └┐
           └────┤            │
                │           ┌┘
                │          ┌┘
                └───┬  ┬───┘
                    │C │
                    │  │
                    └┬┬┘
                     └┘
                     E
The first thing you will notice is that points inside the Mandelbrot Set
correspond to Julia Sets with an inside--that is, the Julia Sets are "fat";
they have thickness. Points outside the Mandelbrot Set correspond to Julia
Sets that have no insides--they're just a bunch of isolated dots.
(This in fact is how Benoit defined the Mandelbrot Set--if the corresponding
Julia Set has an interior, then the point is in the Mandelbrot Set. If the
corresponding Julia Set is just a bunch of disconnected points, then the point
isn't in the Mandelbrot Set. Once Mandelbrot had defined his set, he was
anxious to draw it. This would have been impossible, except for one fact
about Julia Sets: If the point (0,0) goes to infinity under iteration, the
Julia Set has no interior. If (0,0) doesn't go to infinity, the Julia Set has
an interior. This fact provides an easy, though time consuming, way to draw
the Mandelbrot Set, and is the basis of all Mandelbrot drawing programs today.)
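Written out as a Python sketch (this is just the test described in the last
paragraph, not code lifted from any actual Mandelbrot program), it looks like
this. The iteration limit, the bail-out radius of 2, and the little viewing
window are my own choices; once |z| exceeds 2 the orbit is guaranteed to run
off to infinity:

    def in_mandelbrot(a, b, max_iter=200, bailout=2.0):
        """Iterate z = z*z + c from z = 0 and see whether the orbit escapes."""
        c = complex(a, b)
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > bailout:
                return False                  # escaped: (a,b) is outside the Set
        return True                           # still bounded after max_iter steps

    # A very low-resolution text picture of the Mandelbrot Set.
    for row in range(21):
        b = 1.2 - row * 0.12
        print("".join("#" if in_mandelbrot(-2.2 + col * 0.05, b) else " "
                      for col in range(60)))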
Another thing you might observe is that the Julia Set resembles the area of
the Mandelbrot Set that its parameters came from. For instance, if you pick a
point in the middle of Pool A--a large, open pool--the corresponding Julia Set
will be a large, open pool. If you pick the point from a smaller pool, the
Julia Set will consist of smaller pools. If you pick the point from an area
of the Mandelbrot Set that has seven-armed spirals, then your Julia Set will
have seven-armed spirals.
It goes much deeper than this, though. Each of the pools--whether large or
small--corresponds to definite dynamics in the Julia Set. For instance, Julia
Sets selected from Pool A all have attracting fixed points. Julia Sets from
Pool B have an attracting point of period 3 surrounding a repelling fixed
point. Julia Sets from Pool F have an attracting point of period 2
surrounding a repelling fixed point. The dynamics of Pool C are just like
those of Pool B, except that the Julia Sets are mirror images of each other
(the same is true of all pairs of points symmetrical about the x axis).
The dynamics resulting from the different pools can be rather neatly
catalogued (I again invite you to consult the literature on this point). If
you explore the different pools and their associated Julia Sets, you will soon
see why. Here is just a little hint of the sort of thing you can look for:
Observe what happens when you move from one pool to its largest attached pool.
For instance, points from Pool A have an attracting fixed point. Points from
Pool F (the largest pool attached to Pool A) have a period 2 attracting point.
In Pool B, we have a period 3 attracting point. When we move to Pool D (the
largest pool attached to Pool B), we have a period 6 point.
This same rule holds true for all pools in the Mandelbrot Set: Whenever you
move from a pool to its largest attached pool, you double the period of the
attracting point. For instance:
Moving From   To    Changes     To
───────────   ──    ────────    ────────
     A        F     Period 1    Period 2
     F        G     Period 2    Period 4
     C        E     Period 3    Period 6
     B        D     Period 3    Period 6
You will discover that this sort of relationship holds true for all the pools
on the Mandelbrot Set. For instance, going to the third large pool on the
right will always do a certain thing--multiply the period of the attracting
point by 3, or 7 or 92 or something like that.
This gives a way to find attracting points of various periods. For instance,
going to the large Pool on the right (i.e., moving from A to B) multiplies the
period of the attracting point by three. So if we go from A to B, and then go
to the large pool on the right of B (not shown on the schematic), and then go
to the large pool on the right of this pool, guess what kind of an attracting
point we find? An attracting point of period 1x3x3x3=27. Going again to the
large pool on the right gives us an attracting point of period 1x3x3x3x3=81.
And so on.
(Time for another advertisement: This is a good reason to REGISTER your copy
of Iterate!, because you will then receive a copy of the Mandelbrot-Julia
generator, which will allow you to zoom in on these small areas of the
Mandelbrot Set and read the parameters right off them. These highly magnified
areas often give Julia Sets that are much more interesting than the ones you
can get off the large graph of the Mandelbrot Set.)
* * *
An interesting area of the Mandelbrot Set to investigate is the area between
the pools--the "channels" between the different "bays", so to speak. Let's
think of the channel between Pool A and Pool F.
What happens here? This is where our attracting fixed point changes to an
attracting point of period 2. So the area around this change (the "channel")
is bound to be interesting--sort of like the area around a nuclear explosion,
where one element changes into another.
This point of change from fixed point to period 2 point is called a
"bifurcation point". And what happens at this bifurcation point is actually a
little more complicated than what I just said. What really happens is that at
the bifurcation point, the fixed point changes from being attracting to
repelling. At the same time, a period 2 attracting point appears. Initially,
the period 2 point is right on top of the fixed point, but as we slowly move
into Pool F, the period 2 point slowly moves away from the fixed point.
If you try drawing Julia Sets made from points on either side of the Channel
between Pools A and F, you'll soon see what I mean.
If we were to draw a schematic of this, it might look like:
                    ┌────────attracting period 2 point
                   ┌┘
attracting        ┌┘
fixed point      ┌┘
─────────────────*───────────repelling fixed point
                 └┐
                  └┐
                   └┐
                    └────────attracting period 2 point
                 * represents the bifurcation point
(This represents the graph we would draw if we plotted the coordinates of the
fixed point, then changed the parameters a little, plotted the fixed point
again, changed the parameters again, plotted the fixed point again, and so on.
It is called--amazingly enough--a bifurcation diagram.)
You'll notice that there is some symmetry here--some "conservation laws", so
to speak. For instance, there is balance between attracting and repelling.
The fixed point must become repelling, so that the period 2 points can be
attracting. There is symmetry in the fact that the period 2 points appear
balanced on opposite sides of the fixed point.
There is in fact a strong correlation here between this "conservation" and the
conservation laws that hold in the physics of subatomic particles. For
instance, when an electron decays into a photon and some other particles
(sorry about being so vague, but I haven't really been keeping up with my
subatomic physics lately), there are certain properties that must be
conserved: charge, momentum, mass-energy, and so on. These conservation laws
actually determine the types of subatomic reactions that can occur--and the
same type of thing holds true for our bifurcation point in the Mandelbrot Set.
In fact, if you look at diagrams of particle reactions, or traces of subatomic
particles in a cloud chamber or bubble chamber, you will see something that
looks strikingly similar to our bifurcation diagram.
(BTW, there is also a fascinating correlation between the Rubik's Cube and the
actions of subatomic particles. 50¢ to the first person who can find a
correlation between the Rubik's Cube and the Mandelbrot Set. P.S. "They're
both good for wasting a lot of time" is not an acceptable answer.)
* * *
The reason all this works the way it does has to do with what Julia Sets are
and how the Mandelbrot Set is defined. Unfortunately, getting into that can
of worms is beyond the scope of this little help file. I invite you to peruse
some of the literature if you're curious.
Function F. z=±sqrt(z-c) c=a+b*i (inverse of Function E)
──────────────────────────────────────────────────────────────────────────────
This function is the inverse of function E. This again is a fascinating
function because of its relationship to Julia Sets and the Mandelbrot Set.
It turns out that the border of Julia Sets (generated by Function E) is
repelling under iteration. That is, points fly away from it, either towards
some attracting point inside the Julia Set, or towards infinity if the point
is outside the set. (This fact forms the basis of computer graphs of the
Julia Set, by the way.)
(In most computer drawings of Julia Sets, the inside of the Julia Set is shown
as black, and the outside is colored. So the border of the Julia Set is the
border between the black and the color in the computer graph.)
So, if you take the inverse of this function, what happens? The border, which
was repelling, suddenly becomes attracting. All points on the plane are
attracted to the border, and once they're on the border, they stay there.
This gives us a handy way to draw the border of the Julia Set: Just iterate a
point under this inverse function. The point will bounce back and forth along
the border and draw it very nicely.
This is what you will see when you run Function F.
Unfortunately, in real life this method of drawing Julia Sets doesn't work
quite as well as we would like it to, because the point is more likely to hit
some parts of the border than others. Thus some parts of the border are very
dark, while others are faint.
Actually, the point will hit everywhere on the border of the Julia Set (or at
least very close to everywhere) sooner or later (in math lingo you would say,
"The orbit of the point is everywhere dense on the border of the Julia Set").
The trouble is, for some Julia Sets, the point hits certain areas of the border
very soon, and other areas much, much later.
You see, we're back to the 23,000 year problem again. If you had the world's
fastest supercomputer and 23,000 years to wait around, this method would work
great with any Julia Set. Well, most Julia Sets, anyway. A few of them would
take 23,000,000 years, a few 23,000,000,000 years, and a very few even longer.
Whether it takes 23 seconds or 23,000,000,000 years to draw the Julia Set
depends on the parameters you choose. For some parameters, the technique
works very nicely. For some parameters, in fact, this technique works better
than the usual technique for drawing Julia Sets.
(The usual technique is the technique that makes the colored pictures of Julia
Sets that you have likely seen if you even know what I'm talking about when I
say "Julia Sets".)
So to get much of a picture with this function, you need to pick the right
parameters. Generally speaking, 'a' and 'b' should be small in absolute
value. Say -2 < a < 1 and -1 < b < 1. To be more precise, c(=a+bi) should be
in the Mandelbrot Set. (See Function E above for how to load parameters
directly from the Mandelbrot Set.) Even within this range, however, certain
parameters will work better. If you're not in this range, though, you haven't
got a prayer (of getting a picture of a nice Julia Set, that is. Who knows
but what you might discover something else interesting?)
You will want to use the <I> command to set a high number of iterations when
you use Function F. 10000 iterations is a good place to start; you will need
more if your Julia Set is more complicated or if you're zooming in on a small
area.
The Julia Sets that you draw with Function F are fractals.
* * *
By the way, you might be wondering why there is the '±' in this function
(z=±sqrt(z-c)). It is because we are taking the inverse of z²+c (see
Function E). The inverse of this includes a sqrt function (set z'=z²+c and
solve for z; you'll soon see why). But actually there are TWO square roots
for each number--a fact which is all too easy to forget. For instance, 10 is
the square root of 100, but so is -10!
So if you start with a number and take its inverse under Function E, you
actually come up with TWO inverses (a positive and negative square root). If
you iterate this inverse function (which is what we're interested in doing),
that means that you take the inverse again. So we take the inverse of both of
the two inverses. Each inverse again has two inverses, so now we have a total
of four points. Doing it again, we find that each of our four points has two
inverses, so now we have eight points. Doing it a fourth time, we end up with
16 points. A fifth time, and we have 32 points, and so on.
As you can see, the number of points grows exponentially! If we were to try
to keep track of all of these inverse points, our computer's memory would soon
be full. After only 20 iterations, we would have 1,048,576 points--many more
than a PC's memory will hold. After 30 iterations, we have 1,073,741,824
points--enough to fill a 3,000 megabyte hard drive.
So it's simply impossible to keep track of all these inverses. What we do
instead is much simpler. We just more or less randomly* pick one of the two
inverses each time. That way, after 30 iterations, we only have one point to
keep track of instead of 1 billion. And believe it or not, the result looks
pretty much the same.
So that is why there's a '±' in this function--because at each step, we're
picking the positive or the negative square root.
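Stripped of Iterate!'s plotting and of the fancier point-choosing algorithms
described in the footnote below, the bare method is only a few lines of
Python (the parameters and the starting point here are arbitrary; any
starting point gets pulled onto the border within a few steps):

    import cmath
    import random

    def function_f(z, a=-0.1, b=0.0):
        """One step of Function F: z = +/-sqrt(z - c), picking the sign at random."""
        root = cmath.sqrt(z - complex(a, b))
        return root if random.random() < 0.5 else -root

    # Plotting the z values (after a few warm-up steps) draws the border of
    # the Julia Set.
    z = complex(0.5, 0.5)
    points = []
    for i in range(10000):
        z = function_f(z)
        if i > 20:                            # skip the first few transient points
            points.append(z)
    print(points[:5])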
* * *
*Actually instead of just randomly picking an inverse, I have come up with
some tricky algorithms that greatly help out in drawing a Julia Set. These
algorithms are controlled by parameters 'c' and 'd'.
1. Closest Point Algorithm. This algorithm chooses the square root that is
closest to the point. This is good for drawing spirals around repelling and
attracting periodic points--a common feature in Julia Sets.
The parameter 'c' determines how often this algorithm will be used.
Parameter 'c' ranges from 0 (never use the algorithm) to 1 (always use the
algorithm).
Parameter 'c' usually should not be set to 1, because this algorithm has a
tendency to get stuck on an attracting point if it is used alone. Typical
good values for 'c' are 0.7-0.9.
2. Point Not Plotted Algorithm. This algorithm checks both inverses to see
if they've been plotted on the graphics screen. If one of the points hasn't
been plotted, the algorithm is more likely to pick that point. (How much more
likely is determined by parameter 'd'.) If both points have been plotted, or
neither has been plotted, the algorithm picks randomly between the two.
Parameter 'c' also determines how often this algorithm will be used. Whenever
Algorithm 1 isn't used, this one is used. So

   c=       means
   ───────  ──────────────────────────────────────────────────────
   0        Algorithm 1 never used; Algorithm 2 always used.
   1        Algorithm 1 always used; Algorithm 2 never used.
   0.5      Each algorithm used 50% of the time
   0.75     Algorithm 1 used 75% of the time, Algorithm 2 used 25%
Parameter 'd' determines how likely Algorithm 2 is to choose the point that
hasn't been plotted. Parameter d=1 means always choose the point that hasn't
been plotted; parameter d=0 means always choose randomly.
Setting parameters 'c' and 'd' when using Function E also has this same
effect, if you use the <J> command to draw a Julia Set.
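Here, for what it's worth, is a hedged Python sketch of how parameters 'c'
and 'd' could steer the choice between the two inverses. It is my reading of
the description above, not Iterate!'s actual code, and the "has this pixel
been plotted" test is faked with a Python set rather than a real graphics
screen:

    import cmath
    import random

    def next_point(z, c_value, c_param, d_param, plotted, to_pixel):
        """Choose one of the two inverses of z*z + c_value, as described above."""
        root = cmath.sqrt(z - c_value)
        candidates = [root, -root]
        if random.random() < c_param:         # Algorithm 1: closest point
            return min(candidates, key=lambda w: abs(w - z))
        fresh = [w for w in candidates if to_pixel(w) not in plotted]
        if len(fresh) == 1 and random.random() < d_param:
            return fresh[0]                   # Algorithm 2: prefer an unplotted pixel
        return random.choice(candidates)

    # Toy usage: a made-up 200x200 pixel grid covering -2..2 on both axes.
    def to_pixel(w):
        return (int((w.real + 2.0) * 50), int((w.imag + 2.0) * 50))

    plotted = set()
    z = complex(0.5, 0.5)
    for _ in range(5000):
        z = next_point(z, complex(-0.1, 0.0), 0.8, 1.0, plotted, to_pixel)
        plotted.add(to_pixel(z))
    print(len(plotted), "pixels touched")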
Function G. x=y y=a*y*(1-b*x-(1-b)*y) (logistic delay equation)
──────────────────────────────────────────────────────────────────────────────
I found this on page 71 of a book by Lauwerier. Again, I've lost the book and
the title. Oh, well.
This function is closely related to the (one-dimensional) function f(x)=1-ax².
This function was a seminal function in the discovery of chaos theory. For
certain values of 'a', the function is totally chaotic. Mathematicians were
astounded that such unpredictable results could come from such a simple
function (the function is in fact an overly simplistic model for population
growth).
Believe it or not, today the dynamics of f(x)=1-ax² have been completely
categorized. That is to say, mathematicians thoroughly understand this
chaotic function and all of its complicated dynamics.
This function (f(x)=1-ax²) is rather thoroughly covered in most books on chaos
theory (just in case you're wanting to know more).
Incidentally, our Function E (z=z²+c) is nothing but f(x)=1-ax² translated to
complex numbers. (Although an exact translation would be z=1-cz², it turns out
that z=z²+c and z=1-cz² both have the exact same dynamics if you make certain
adjustments in the value of 'c'.)
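One concrete way to see this--my own little check, not something taken from
the literature--is the substitution w=-c*z, under which z=1-c*z² turns into
w=w²+(-c). A few lines of Python confirm that the two orbits track each other
exactly:

    c = 0.5                                   # an arbitrary test value of 'c'
    z = 0.1                                   # an arbitrary starting point
    w = -c * z                                # the change of variable  w = -c*z
    for i in range(10):
        z = 1 - c * z * z                     # iterate the  1 - c*z^2  form
        w = w * w + (-c)                      # iterate the  z^2 + c  form, c negated
        print(i + 1, abs(w - (-c * z)))       # stays (essentially) zero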
Function G is closely related to this function (f(x)=1-ax²), as you can see if
you examine them both closely. In fact, you could sort of think of Function G
as a two-dimensional analog of f(x)=1-ax².
The fact that it's a "logistic" equation means that it is in some way supposed
to model population growth.
Function H. x=y y=a*x+b*y+c*x²+d*x*y+mu*y²
──────────────────────────────────────────────────────────────────────────────
Function H again comes from Lauwerier, this time page 72. This is another
variation on the logistic theme, closely related to Function G and
f(x)=1-ax².
Function I. x=a*x*(1-x-y) y=b*x*y (predator-prey model)
──────────────────────────────────────────────────────────────────────────────
This comes from Lauwerier page 74. This is a very simple predator-prey model.
Think of the x axis representing the population of rabbits, and the y axis the
population of coyotes, and you'll have the basic idea.
Function I displays chaotic motion for many parameters, which raises the
question of whether chaotic motion can actually be observed in natural
relationships of this sort.
Usually in population models like this one, 1 represents the maximum
population and 0 the minimum. You can think of it ranging from 100% down to
0% or something like that.
You can see a close relationship between this map and our logistic maps
(Functions G and H) as well as with the Henon Map (all are in fact population
models).
Function J. x=a*x*exp((1-sqrt(1+y))/b) y=a*x-a*x*exp((1-sqrt(1+y))/b)
──────────────────────────────────────────────────────────────────────────────
Function J is a model of parasitoid-host interaction. Think of the x axis as
representing the population of the parasite, and the y axis the population of
the host (or perhaps it's the other way around--I'll let you figure that one
out).
As you can see, the behavior of this function is strikingly different than
that of Function I. This leads you to wonder how a species' relationship with
its environment might affect its long term population growth (is it for
instance chaotic, leading eventually to extinction, or chaotic, but within a
closely prescribed area, or stable--perhaps heading towards an attracting
fixed point, as this function seems to do).
This model comes from Lauwerier, pages 75-76.
Function K. x=y y=-x+2*a*y+4*(1-a)*y²/(1+y²) (area preserving)
──────────────────────────────────────────────────────────────────────────────
This function also is area preserving. I'm not quite sure where I lifted this
one from.
Function L. x=a*x*(1-2*y)+y y=b*y*(1-y) (horseshoe map; try a=1/3, b=4)
──────────────────────────────────────────────────────────────────────────────
This is the famous horseshoe map. As soon as you run it with the recommended
parameters, you'll know how it got its name.
This could also be called the "Taffy Pulling Map" for reasons that I'll let
you figure out.
You'll want to try a high number of iterations with this one--maybe 30,000
iterations or more.
An interesting thing to do with this map is to zoom in on one side of the
horseshoe. You'll see that as you get closer, the sides resolve into ever
more delicate filaments. In fact, each filament appears to be razor thin--
the sides only get their thickness from the bundles of filaments that make
them up. But as you zoom in on a filament, you discover that it isn't razor
thin after all--it is made up of even thinner filaments, which are themselves
made up of thinner filaments, and so on.
There is a certain pattern to the filaments, which reminds one of the spectral
lines which are observed in spectroscopy. As you look at them closer and
closer, they delicately divide and divide again, and it seems that there's no
end to this dividing (in this particular case, there isn't).
This pattern of infinitely dividing filaments is also reminiscent of the
divisions in a Cantor Set*, as well as the bifurcation diagram for the
logistic equation f(x)=1-ax². (See the literature for more info on
bifurcation diagrams.)
This pattern of ever-dividing filaments turns out to be a fractal.
* * *
*The Cantor Set is made by starting with a line segment (say the segment
between 0 and 1). You remove the middle third of the segment (the part from
1/3 to 2/3). You then have two line segments. Remove the middle thirds of
each of these (I won't try to get into the fractions that this entails, but
you can if you want to). Now you have four line segments. Remove the middle
thirds of these line segments. Now continue this pattern an infinite number
of times and what you have left is the Cantor Set.
Here is a representation of the first five steps:
   0                          1/3                        2/3                          1
1  ---------------------------------------------------------------------------------
2  ---------------------------                           ---------------------------
3  ---------         ---------                           ---------         ---------
4  ---   ---         ---   ---                           ---   ---         ---   ---
5  - -   - -         - -   - -                           - -   - -         - -   - -
The Cantor Set has this amusing property: It has an infinite number of points
(in fact, an uncountably infinite number of points) within a finite length,
yet at the same time the set itself has NO LENGTH.
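The construction is easy to carry out by machine. Here is a small Python
sketch; the five steps it prints match the picture above, and the steadily
shrinking total length is the NO LENGTH property showing itself:

    def cantor_step(segments):
        """Remove the middle third of every segment ((left, right) pairs)."""
        out = []
        for left, right in segments:
            third = (right - left) / 3.0
            out.append((left, left + third))      # keep the left third
            out.append((right - third, right))    # keep the right third
        return out

    segments = [(0.0, 1.0)]                       # step 1: the whole segment
    for step in range(1, 6):
        total = sum(right - left for left, right in segments)
        print("step %d: %d segments, total length %.4f"
              % (step, len(segments), total))
        segments = cantor_step(segments)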
You can see in the definition of the Cantor Set the same idea of iteration
that is the basis of Iterate!.
The Cantor Set is also a fractal. You will see some relationship between
it and the earlier fractal I attempted to draw, of a circle surrounded by
four circles, each surrounded by four circles, etc.
By the way, the one dimensional logistic map f(x)=1-ax² has a strong
relationship with the Cantor Set. For certain values of 'a', the set of all
points that don't go to infinity when iterated under f(x) form a Cantor Set.
You can think of it this way: The points that go to infinity form the "holes"
in the Cantor Set. The points that are left don't go to infinity, and they
form a Cantor Set. You can find out more about this in the literature.
While we're on the subject of weird relationships, the Horseshoe Map has a
strong relationship with the one-dimensional logistic map f(x)=1-ax² (the same
map we've just been mentioning in regard to the Cantor Set). If you take a
certain cross-section of the Horseshoe Map it is the same as the logistic map
(both functions with the right parameters, of course, and "the same"
meaning topologically the same). We could say the Horseshoe Map "contains"
the logistic map, since a small part of the Horseshoe Map (a cross-section)
mimics the logistic equation.
We've already mentioned that the Henon Map "contains" the Horseshoe Map in the
same sort of way. So we have the logistic map contained in the Horseshoe Map
and the Horseshoe Map contained in the Henon Map . . . interesting.
And let us not forget that the logistic map contains a Cantor Set--so the
Horseshoe Map and the Henon Map will produce Cantor Sets as well. Perhaps
it's no accident that I saw Cantor Sets in those filaments . . .
But don't take my word for it. Try it yourself.
Function M. x=1-y+abs(x) y=x
──────────────────────────────────────────────────────────────────────────────
This map I stole from FractInt (I just read the FractInt documentation, and
they admitted to lifting it from "The Science of Fractal Images" page 149, so
now I feel better). They call it the Gingerbread Man. You will again notice
the amazingly complex graphs coming from a very simple formula.
Abs(x), by the way, means the absolute value of x.
Function U. User Function
──────────────────────────────────────────────────────────────────────────────
The U in this function stands for "you". (Is this from the Mouseketeers or
something? Oh, well.) The best thing about Iterate! isn't finding out
about the functions that somebody else has discovered, but trying to invent
some interesting functions yourself.
Good luck.
(Remember to send your interesting functions in. I will distribute the most
interesting ones with Iterate! in the future.)
(ver 3.11, 9/93)